Results 1 - 7 of 7
1.
Ear Hear ; 44(5): 1107-1120, 2023.
Article in English | MEDLINE | ID: mdl-37144890

ABSTRACT

OBJECTIVES: Understanding speech-in-noise (SiN) is a complex task that recruits multiple cortical subsystems. Individuals vary in their ability to understand SiN, and this variance cannot be explained by simple peripheral hearing profiles; recent work by our group (Kim et al. 2021, NeuroImage) highlighted central neural factors underlying the variance in SiN ability in normal-hearing (NH) subjects. The present study examined neural predictors of SiN ability in a large cohort of cochlear-implant (CI) users.

DESIGN: We recorded electroencephalography in 114 postlingually deafened CI users while they completed the California Consonant Test, a word-in-noise task. In many subjects, data were also collected on two other commonly used clinical measures of speech perception: a word-in-quiet task (consonant-nucleus-consonant words) and a sentence-in-noise task (AzBio sentences). Neural activity was assessed at a vertex electrode (Cz), which could help maximize eventual generalizability to clinical situations. The N1-P2 complex of event-related potentials (ERPs) at this location was entered into multiple linear regression analyses, along with several demographic and hearing factors, as predictors of SiN performance.

RESULTS: In general, there was good agreement between the scores on the three speech perception tasks. ERP amplitudes did not predict AzBio performance, which was instead predicted by duration of device use, low-frequency hearing thresholds, and age. However, ERP amplitudes were strong predictors of performance on both word recognition tasks: the California Consonant Test (conducted simultaneously with the electroencephalography recording) and the consonant-nucleus-consonant test (conducted offline). These correlations held even after accounting for known predictors of performance, including residual low-frequency hearing thresholds. In CI users, better performance was predicted by an increased cortical response to the target word, in contrast to previous reports in normal-hearing subjects, in whom speech perception ability was accounted for by the ability to suppress noise.

CONCLUSIONS: These data indicate a neurophysiological correlate of SiN performance, thereby revealing a richer profile of an individual's hearing performance than psychoacoustic measures alone. These results also highlight important differences between sentence and word recognition measures of performance and suggest that individual differences in these measures may be underwritten by different mechanisms. Finally, the contrast with prior reports of NH listeners on the same task suggests that CI users' performance may be explained by a different weighting of neural processes than in NH listeners.
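As an illustration of the regression approach described in the DESIGN and RESULTS sections, the following minimal Python sketch regresses a word-in-noise score on the Cz N1-P2 amplitude together with demographic and audiometric covariates. The file name and column names are hypothetical placeholders, not the authors' data or analysis code.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical per-subject table: one row per CI user with the word-in-noise
    # score, the N1-P2 amplitude at Cz, and the covariates named in the abstract.
    df = pd.read_csv("ci_subjects.csv")

    model = smf.ols(
        "cct_score ~ n1p2_amplitude_cz + age + duration_of_ci_use + low_freq_threshold",
        data=df,
    ).fit()
    # The coefficient on n1p2_amplitude_cz indexes the ERP-behavior relationship
    # after accounting for the other predictors.
    print(model.summary())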


Subjects
Cochlear Implantation, Cochlear Implants, Speech Perception, Humans, Speech, Individuality, Noise, Speech Perception/physiology
2.
Hear Res ; 427: 108649, 2023 01.
Article in English | MEDLINE | ID: mdl-36462377

ABSTRACT

Cochlear implants (CIs) have evolved to combine residual acoustic hearing with electric hearing. CI users with residual acoustic hearing are expected to experience better speech-in-noise perception than CI-only listeners because the preserved acoustic cues aid in unmasking speech from background noise. This study sought the neural substrate of this better speech unmasking in CI users with preserved acoustic hearing compared to those with a lesser degree of acoustic hearing. Cortical evoked responses to speech in multi-talker babble noise were compared between 29 Hybrid (i.e., electric-acoustic stimulation, or EAS) and 29 electric-only CI users. The amplitude ratio of the evoked responses to speech and to noise, or internal SNR, was significantly larger in the CI users with EAS. This result indicates that CI users with better residual acoustic hearing exhibit enhanced unmasking of speech from background noise.
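A minimal sketch of how the "internal SNR" comparison might look in Python, with placeholder amplitudes standing in for the measured cortical evoked responses; the independent-samples t-test shown here is an assumed stand-in, not necessarily the study's actual statistical analysis.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # Placeholder evoked-response amplitudes to the target speech and to the babble
    # noise, one value per subject (29 Hybrid/EAS users, 29 electric-only users).
    speech_eas, noise_eas = rng.normal(4.0, 1.0, 29), rng.normal(2.0, 0.5, 29)
    speech_el,  noise_el  = rng.normal(3.0, 1.0, 29), rng.normal(2.0, 0.5, 29)

    def internal_snr(speech_amp, noise_amp):
        """Amplitude ratio of the evoked response to speech vs. to noise."""
        return np.asarray(speech_amp) / np.asarray(noise_amp)

    t, p = stats.ttest_ind(internal_snr(speech_eas, noise_eas),
                           internal_snr(speech_el, noise_el))
    print(f"group difference in internal SNR: t = {t:.2f}, p = {p:.3f}")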


Subjects
Cochlear Implantation, Cochlear Implants, Speech Perception, Speech, Speech Perception/physiology, Hearing, Acoustic Stimulation, Electric Stimulation
3.
Front Neurosci ; 16: 906616, 2022.
Article in English | MEDLINE | ID: mdl-36061597

ABSTRACT

Auditory prostheses provide an opportunity for rehabilitation of hearing-impaired patients. Speech intelligibility can be used to estimate the extent to which an auditory prosthesis improves the user's speech comprehension. Although behavioral speech intelligibility testing is the gold standard, precise evaluation is limited by its subjectiveness. Here, we used a convolutional neural network to predict speech intelligibility from electroencephalography (EEG). Sixty-four-channel EEG was recorded from 87 adult participants with normal hearing. Sentences spectrally degraded by 2-, 3-, 4-, 5-, and 8-channel vocoders were used to create relatively low speech intelligibility conditions, and a Korean sentence recognition test was used. The speech intelligibility scores were divided into 41 discrete levels ranging from 0 to 100% in steps of 2.5%; three score levels (30.0, 37.5, and 40.0%) were not collected. Two speech features, the speech temporal envelope (ENV) and phoneme (PH) onsets, were used to extract continuous-speech EEG for intelligibility prediction. The deep learning model was trained on one of four inputs: the event-related potentials (ERPs) themselves, or the correlation coefficients between the ERPs and the ENV, between the ERPs and the PH onsets, or between the ERPs and the product of PH and ENV (PHENV). The speech intelligibility prediction accuracies were 97.33% (ERP), 99.42% (ENV), 99.55% (PH), and 99.91% (PHENV). The models were interpreted using the occlusion sensitivity approach: the informative electrodes of the ENV model were located in the occipital area, whereas those of the phoneme-based models (PH and PHENV) were located in language-processing areas. Of the models tested, the PHENV model obtained the best speech intelligibility prediction accuracy and may facilitate clinical prediction of speech intelligibility with a more comfortable test.
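The following Python sketch illustrates, under simplified assumptions, how the ENV, PH, and PHENV features and the per-electrode correlation coefficients fed to a classifier might be assembled. The sampling rate, signal shapes, and feature construction here are placeholders, not the paper's actual preprocessing or CNN architecture.

    import numpy as np

    fs = 64                                    # assumed feature sampling rate (Hz)
    n = fs * 5                                 # one 5-second sentence
    rng = np.random.default_rng(1)

    env = np.abs(rng.normal(size=n))           # speech temporal envelope (ENV), placeholder
    ph = np.zeros(n); ph[::8] = 1.0            # phoneme-onset impulse train (PH), placeholder
    phenv = ph * env                           # PHENV: product of PH and ENV

    eeg = rng.normal(size=(64, n))             # 64-channel continuous-speech EEG, placeholder

    def corr_features(eeg, stimulus):
        """Correlation coefficient between each electrode and a stimulus feature."""
        s = (stimulus - stimulus.mean()) / stimulus.std()
        e = (eeg - eeg.mean(axis=1, keepdims=True)) / eeg.std(axis=1, keepdims=True)
        return e @ s / len(s)                  # shape: (n_electrodes,)

    features = np.stack([corr_features(eeg, x) for x in (env, ph, phenv)])
    print(features.shape)                      # (3 feature types, 64 electrodes) -> classifier input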

4.
J Integr Neurosci ; 21(1): 29, 2022 Jan 28.
Article in English | MEDLINE | ID: mdl-35164465

ABSTRACT

Background: Verbal communication involves retrieving the semantic and syntactic information carried by different kinds of words (i.e., parts of speech) in a sentence. Content words, such as nouns and verbs, convey essential information about the overall meaning (semantics) of a sentence, whereas function words, such as prepositions and pronouns, carry less meaning and support the syntax of the sentence. Methods: This study aimed to identify neural correlates of the differential information retrieval processes for several parts of speech (content vs. function words, nouns vs. verbs, and objects vs. subjects) using electroencephalography recorded during English spoken-sentence comprehension in thirteen participants with normal hearing. Phoneme-related information has recently emerged as a useful acoustic feature for investigating human speech processing; we therefore examined the importance of various parts of speech during sentence processing using the onset times of phonemes. Results: Differences in the strength of cortical responses in language-related brain regions provide neurological evidence that, in spoken sentences, content words, nouns, and objects are dominant over function words, verbs, and subjects, respectively. Conclusions: These findings may provide insights into the differing contributions of certain types of words to the overall process of sentence understanding.
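A rough Python sketch of the kind of onset-based contrast described in the Methods: epoch a continuous EEG channel at word onsets, split the epochs by part of speech, and compare response strength between the two classes. All signals, onset times, and labels below are synthetic placeholders, and the response measure is a simplification rather than the study's analysis.

    import numpy as np
    from scipy import stats

    fs = 256                                     # assumed EEG sampling rate (Hz)
    rng = np.random.default_rng(2)
    eeg = rng.normal(size=fs * 60)               # one channel, 60 s of EEG (placeholder)

    # Word-onset times (s) and part-of-speech labels (placeholder annotations).
    onsets = np.arange(1.0, 58.5, 0.8)
    is_content = rng.random(onsets.size) < 0.5   # True = content word, False = function word

    def response_strength(signal, onset_s, fs, win=(0.0, 0.4)):
        """Mean absolute amplitude in a post-onset window (crude response measure)."""
        i0, i1 = int((onset_s + win[0]) * fs), int((onset_s + win[1]) * fs)
        return np.abs(signal[i0:i1]).mean()

    amps = np.array([response_strength(eeg, t, fs) for t in onsets])
    t_val, p_val = stats.ttest_ind(amps[is_content], amps[~is_content])
    print(f"content vs. function words: t = {t_val:.2f}, p = {p_val:.3f}")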


Subjects
Brain Mapping, Cerebral Cortex/physiology, Comprehension/physiology, Electroencephalography, Psycholinguistics, Speech Perception/physiology, Adult, Female, Humans, Male, Young Adult
5.
J Neurosci Methods ; 311: 253-258, 2019 01 01.
Article in English | MEDLINE | ID: mdl-30389490

ABSTRACT

Classification of spoken word-evoked potentials is useful for both neuroscientific and clinical applications, including brain-computer interfaces (BCIs). By evaluating whether adopting a biology-based structure improves a classifier's accuracy, we can investigate the importance of such structure in human brain circuitry and advance BCI performance. In this study, we propose a semantic-hierarchical structure for classifying spoken word-evoked cortical responses. The proposed structure first decodes the semantic grouping of the word (e.g., a body part vs. a number) and then decodes which exact word was heard. The proposed classifier structure exhibited a consistent ∼10% improvement in classification accuracy compared with a non-hierarchical structure. Our results provide a tool for investigating the neural representation of semantic hierarchy and of the acoustic properties of spoken words in the human brain, and they suggest an improved algorithm for BCIs operated by decoding heard, and possibly imagined, words.
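The two-stage idea can be sketched as follows in Python (scikit-learn), with synthetic feature vectors and a small hypothetical vocabulary standing in for the spoken word-evoked cortical responses; the logistic-regression classifiers used here are assumptions, not the study's actual models.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(3)
    X = rng.normal(size=(200, 32))               # 200 trials x 32 features (placeholder)
    words = rng.choice(["hand", "foot", "two", "seven"], size=200)
    groups = np.where(np.isin(words, ["hand", "foot"]), "body_part", "number")

    # Stage 1: decode the semantic group of the heard word.
    group_clf = LogisticRegression(max_iter=1000).fit(X, groups)

    # Stage 2: one word decoder per semantic group, trained only on that group's trials.
    word_clfs = {g: LogisticRegression(max_iter=1000).fit(X[groups == g], words[groups == g])
                 for g in np.unique(groups)}

    def predict_word(x):
        g = group_clf.predict(x.reshape(1, -1))[0]        # coarse semantic decision first...
        return word_clfs[g].predict(x.reshape(1, -1))[0]  # ...then the exact word within that group

    print(predict_word(X[0]))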


Subjects
Brain/physiology, Models, Neurological, Pattern Recognition, Automated/methods, Semantics, Signal Processing, Computer-Assisted, Speech Perception/physiology, Adult, Algorithms, Electrocorticography, Evoked Potentials, Humans, Male, Speech, Young Adult
6.
Sensors (Basel) ; 14(6): 10346-60, 2014 Jun 12.
Article in English | MEDLINE | ID: mdl-24926692

ABSTRACT

Detecting hearing loss as early as possible is important and widely recommended: if it is found early, proper treatment may help improve hearing and reduce the negative consequences of hearing loss. In this study, we developed smartphone-based hearing screening methods that allow hearing to be tested ubiquitously. Environmental noise, however, generally reduces ear sensitivity and causes a hearing threshold shift (HTS). To overcome this limitation of the screening environment, we developed a correction algorithm to reduce the effect of the HTS. The built-in microphone and headphone were calibrated to provide standard units of measure. The HTSs in the presence of either white or babble noise were systematically investigated to determine the mean HTS as a function of noise level. When the hearing screening application runs, the smartphone automatically measures the environmental noise and applies the corresponding HTS value to correct the hearing threshold. A comparison with pure-tone audiometry shows that this screening method can closely estimate the hearing threshold even in the presence of noise. We expect that the proposed ubiquitous hearing test could be used as a simple hearing screening tool and could alert users who suffer from hearing loss.
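A minimal Python sketch of the correction step described above: measure the ambient noise level, look up the mean hearing threshold shift expected at that level, and subtract it from the threshold measured in noise. The lookup values below are invented placeholders; the real correction would use the calibration data collected in the study.

    import bisect

    # Hypothetical calibration table: mean hearing threshold shift (dB) as a
    # function of background noise level (dB SPL). Placeholder values only.
    NOISE_LEVELS_DB = [30, 40, 50, 60, 70]
    MEAN_HTS_DB     = [0,  2,  5, 10, 18]

    def hts_correction(noise_db):
        """Linearly interpolate the expected threshold shift for a noise level."""
        if noise_db <= NOISE_LEVELS_DB[0]:
            return MEAN_HTS_DB[0]
        if noise_db >= NOISE_LEVELS_DB[-1]:
            return MEAN_HTS_DB[-1]
        i = bisect.bisect_right(NOISE_LEVELS_DB, noise_db)
        x0, x1 = NOISE_LEVELS_DB[i - 1], NOISE_LEVELS_DB[i]
        y0, y1 = MEAN_HTS_DB[i - 1], MEAN_HTS_DB[i]
        return y0 + (y1 - y0) * (noise_db - x0) / (x1 - x0)

    threshold_in_noise_db = 35.0     # threshold measured while ambient noise was present
    ambient_noise_db = 55.0          # level reported by the phone's built-in microphone
    corrected = threshold_in_noise_db - hts_correction(ambient_noise_db)
    print(f"corrected hearing threshold: {corrected:.1f} dB HL")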


Subjects
Cell Phone, Hearing Tests/methods, Medical Informatics Applications, Adult, Aged, Calibration, Female, Humans, Male, Middle Aged, Noise, Young Adult
7.
Eur J Med Chem ; 38(1): 75-87, 2003 Jan.
Article in English | MEDLINE | ID: mdl-12593918

ABSTRACT

A series of 1-benzyl-3-(imidazol-1-ylmethyl)indole derivatives 35-46 was prepared under mild reaction conditions and tested for antifungal activity. Pharmacomodulation at N(1), C(2) and C(5) of the indole ring and on the alkyl chain (R(1)) was carried out starting from the corresponding 3-acylindoles 6, 7 or 3-formylindoles 11-22. The target imidazolyl compounds 35-46 were obtained in satisfactory yields by CO(2) elimination from the intermediate carbamates. All of the compounds were evaluated in vitro against two human fungal pathogens, Candida albicans (CA980001) and Aspergillus fumigatus (AF980003), with amphotericin B, fluconazole and itraconazole as references. Seven of the 27 compounds (35b, 35e, 35g, 35h, 36a, 38a and especially 40a) exerted significant antifungal activity against C. albicans, with MICs in the range of 1-6 microg mL(-1). Against A. fumigatus, the MICs of most of our compounds exceeded 20 microg mL(-1), in contrast to the reference drugs amphotericin B and itraconazole, whose MIC(90) and MIC(80) values were 0.14 and 0.50 microg mL(-1), respectively. Our most potent compound against this pathogen, 45a, exhibited an MIC (8 +/- 1 microg mL(-1)) 16-fold higher than that of itraconazole.


Subjects
Antifungal Agents/chemical synthesis, Antifungal Agents/pharmacology, Benzene Derivatives/chemical synthesis, Imidazoles/chemical synthesis, Indoles/chemical synthesis, Aspergillus fumigatus/drug effects, Azoles/pharmacology, Benzene Derivatives/pharmacology, Candida albicans/drug effects, Fumarates/chemical synthesis, Humans, Imidazoles/pharmacology, Indoles/pharmacology, Microbial Sensitivity Tests, Models, Chemical, Molecular Structure, Nitrates/chemical synthesis